
Initial separation of dparray into 2 parts. #184


Merged
merged 1 commit into master on Dec 7, 2020

Conversation

DrTodd13
Contributor

This isn't complete: I wasn't able to get the file to show up as part of the package, so I wasn't able to test much. Help needed!

@PokhodenkoSA
Contributor

We need tests. @vlad-perevezentsev could write them. @DrTodd13, could you suggest how to test this? Possibly dpNP had tests for dparray.

@PokhodenkoSA
Contributor

PokhodenkoSA commented Nov 30, 2020

This PR has an issue to fix: dparray cannot be imported from dpctl.
I see a warning:

>>> from dpctl import dparray
/localdisk/work/spokhode/miniconda3/envs/dpctl-dev/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: dpctl._sycl_core.SyclDevice size changed, may indicate binary incompatibility. Expected 56 from C header, got 88 from PyObject
  return f(*args, **kwds)

There is also an error:

>>> import dpctl, dpctl.memory, dpctl.dparray
>>> X = dpctl.dparray.ndarray((256, 4), dtype='d')
Segmentation fault (core dumped)

@PokhodenkoSA
Contributor

Related to #188

@oleksandr-pavlyk
Contributor

oleksandr-pavlyk commented Nov 30, 2020

What's the best way to push changes to this branch?

Here is where it stands now, with proposed changes applied:

In [2]: import dpctl, dpctl.memory, dpctl.dparray

In [3]: X = dpctl.dparray.ndarray((256, 4), dtype='d')

In [4]: X.__sycl_usm_array_interface__
Out[4]: {}

In [5]: X.__array_interface__
Out[5]:
{'data': (93838219988992, False),
 'strides': None,
 'descr': [('', '<f8')],
 'typestr': '<f8',
 'shape': (256, 4),
 'version': 3}

In [6]: X.base
Out[6]: <Intel(R) USM allocated memory block of 8192 bytes at 0x555869c4d000>

In [7]: X.base._pointer
Out[7]: 93838219988992

sys.stdout.flush()

import dpctl
from dpctl._memory import MemoryUSMShared

Should be from dpctl.memory import MemoryUSMShared.

functions_list = [o[0] for o in getmembers(np) if isfunction(o[1]) or isbuiltin(o[1])]
class_list = [o for o in getmembers(np) if isclass(o[1])]

array_interface_property = "__array_interface__"

Should be __sycl_usm_array_interface__ instead.
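For context, the enumeration pattern in the snippet above can be reproduced standalone; note that ufuncs such as np.add are neither plain functions nor builtins, so they are not captured by this filter and need separate handling (which is part of why __array_ufunc__ dispatch matters later in this thread):

```python
# Sketch of the member-enumeration pattern from the reviewed snippet:
# collect names of NumPy callables and (name, class) pairs of NumPy classes.
import numpy as np
from inspect import getmembers, isfunction, isbuiltin, isclass

functions_list = [o[0] for o in getmembers(np) if isfunction(o[1]) or isbuiltin(o[1])]
class_list = [o for o in getmembers(np) if isclass(o[1])]

# ufuncs (e.g. np.add) are instances of np.ufunc, so isfunction/isbuiltin
# both reject them and they do not appear in functions_list.
print("ndarray" in dict(class_list))
```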

nelems = np.prod(shape)
dt = np.dtype(dtype)
isz = dt.itemsize
buf = MemoryUSMShared(nbytes=isz*max(1,nelems))

should be

nbytes = int(isz*max(1, nelems))
buf = MemoryUSMShared(nbytes)
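The int() cast in the suggested fix matters because np.prod returns a NumPy scalar (np.intp), not a plain Python int. A small sketch of the corrected size computation, with the shape and dtype from the session above (the real code then passes nbytes to MemoryUSMShared):

```python
# Sketch of the corrected nbytes computation suggested in the review.
# np.prod returns a NumPy integer scalar, so the product is cast to a
# plain Python int before being used as an allocation size.
import numpy as np

shape = (256, 4)
dtype = 'd'

nelems = np.prod(shape)        # np.intp(1024), not a plain int
dt = np.dtype(dtype)
isz = dt.itemsize              # 8 bytes for float64
nbytes = int(isz * max(1, nelems))

print(nbytes)  # 256 * 4 elements * 8 bytes each -> 8192
```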

dtype=dtype, buffer=buffer,
offset=offset, strides=strides,
order=order)
buf = MemoryUSMShared(nbytes=ar.nbytes)

Same as above, should be

nbytes = int(ar.nbytes)
buf = MemoryUSMShared(nbytes)

# subclass of ndarray, including our own.
if hasattr(obj, array_interface_property):
    return
if isinstance(obj, numba.core.runtime._nrt_python._MemInfo):

numba is undefined.


@DrTodd13 Why do we use this function from numba? Is it optional? Can we copy this function into our code?
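One possible way to make the dependency optional (a hedged sketch, not dpctl's actual code; is_nrt_meminfo is a hypothetical helper name): guard the import and fall back to an empty tuple, for which isinstance() is always False.

```python
# Hedged sketch: make the numba dependency optional. If numba is missing,
# _MemInfo falls back to an empty tuple, and isinstance(obj, ()) is
# always False, so the check simply never matches.
try:
    from numba.core.runtime._nrt_python import _MemInfo
except ImportError:
    _MemInfo = ()  # isinstance against an empty tuple is always False

def is_nrt_meminfo(obj):  # hypothetical helper name
    return isinstance(obj, _MemInfo)

print(is_nrt_meminfo(object()))  # False whether or not numba is installed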

@oleksandr-pavlyk
Copy link
Contributor

oleksandr-pavlyk commented Nov 30, 2020

Method __call__ is handled in __array_ufunc__, but method __reduce__ is not:

In [7]: ms = dpctl.memory.MemoryUSMShared(8*4)

In [8]: X = dpctl.dparray.ndarray((4,), dtype='d', buffer=ms)

In [9]: X
Out[9]:
ndarray([4.64247707e-310, 4.64247698e-310, 9.48606040e-322,
         2.37151510e-322])

In [10]: np.add(X, 1)
Out[10]: ndarray([1., 1., 1., 1.])

In [11]: np.add(X, 1).base
Out[11]: <Intel(R) USM allocated memory block of 32 bytes at 0x5575e19ea000>

In [12]: X.sum()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-12-16f51ebe4109> in <module>()
----> 1 X.sum()

/home.conda/envs/idp/lib/python3.7/site-packages/numpy/core/_methods.py in _sum(a, axis, dtype, out, keepdims, initial, where)
     36 def _sum(a, axis=None, dtype=None, out=None, keepdims=False,
     37          initial=_NoValue, where=True):
---> 38     return umr_sum(a, axis, dtype, out, keepdims, initial, where)
     39
     40 def _prod(a, axis=None, dtype=None, out=None, keepdims=False,

TypeError: operand type(s) all returned NotImplemented from __array_ufunc__(<ufunc 'add'>, 'reduce', ndarray([4.64247707e-310, 4.64247698e-310, 9.48606040e-322,
         2.37151510e-322]), axis=None, dtype=None, keepdims=False, where=True): 'ndarray'
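The failure above happens because the override only implements the '__call__' ufunc method and returns NotImplemented for 'reduce'. A minimal sketch (not dpctl's actual code) of an ndarray subclass whose __array_ufunc__ dispatches on the method name, so that reductions like X.sum() (which routes through np.add with method='reduce') work alongside plain calls:

```python
# Minimal sketch: dispatch on `method` inside __array_ufunc__ so that
# both '__call__' (e.g. np.add(X, 1)) and 'reduce' (e.g. X.sum(), which
# invokes np.add.reduce) are handled instead of returning NotImplemented.
import numpy as np

class MyArray(np.ndarray):
    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Unwrap MyArray operands to plain ndarrays, run the requested
        # ufunc method, then re-wrap array results in MyArray.
        inputs = tuple(np.asarray(x) if isinstance(x, MyArray) else x
                       for x in inputs)
        result = getattr(ufunc, method)(*inputs, **kwargs)
        if isinstance(result, np.ndarray):
            result = result.view(MyArray)
        return result

X = np.arange(4, dtype='d').view(MyArray)
print(np.add(X, 1))  # method == '__call__'
print(X.sum())       # method == 'reduce' -> 6.0
```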

@PokhodenkoSA
Contributor

@oleksandr-pavlyk I will address all your review comments and notes tomorrow. This PR is at the core of many further modifications, and I would like to merge it tomorrow after fixing the issues and adding tests.

@PokhodenkoSA
Contributor

Method __call__ is handled in __array_ufunc__, but method __reduce__ is not:

@oleksandr-pavlyk @DrTodd13 How could it be fixed? Is it OK to create a separate issue for it and fix it in a separate PR?

@diptorupd diptorupd merged commit 6b892ea into IntelPython:master Dec 7, 2020